



stage index:  i 0 1 2 3 4 5 6 7 8 f   (i: initial, f: final)

make lists for both forward and backward
each element of the forward list is a one-step probability factor: T*O
 f0 f1 f2 f3 f4 b5 b6 b7 b8 b9

 f0 = T*O
 f1 = T*O  (f0 != f1 because the observation matrix O differs at each step)

each element of the backward list is likewise a one-step probability factor: T*O

 b0 = T*O
 b1 = T*O  (b0 != b1 because O differs at each step)


 f_t0 = f0*f1*f2*f3*f4
 b_t0 = b5*b6*b7*b8*b9

 f_t0*b_t0 = y5_t0   (posterior at the window boundary, step 5)

 sliding the window by one step: divide out the oldest factor, multiply in the new one
 f_t1 = f_t0/f0 * f5
 b_t1 = b_t0/b5 * b10


 T = (t11 t12
 	  t21 t22)

 
 O = (o1 0
 	  0  o2)
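The sliding-window update above (f_t1 = f_t0/f0 * f5) can be sketched with 2x2 matrices in NumPy. The T entries and the diagonal O values below are made-up placeholders, not values from any real channel:

```python
import numpy as np

rng = np.random.default_rng(0)

T = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # transition matrix (t11 t12; t21 t22)

# one diagonal observation matrix O_k per step; values are arbitrary
Os = [np.diag(rng.uniform(0.1, 1.0, size=2)) for _ in range(6)]
Ms = [T @ O for O in Os]            # one-step factors f_k = T * O_k

# f_t0 = f0 f1 f2 f3 f4  (window of 5 steps)
f_t0 = np.linalg.multi_dot(Ms[:5])

# slide the window: divide out the oldest factor, multiply in the new one
# f_t1 = f0^{-1} f_t0 f5 = f1 f2 f3 f4 f5
f_t1 = np.linalg.inv(Ms[0]) @ f_t0 @ Ms[5]

assert np.allclose(f_t1, np.linalg.multi_dot(Ms[1:6]))
```

Note the "division" is a matrix inverse applied on the left, so the factor order in the product is preserved.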


splitting an index range into two sub-ranges that share one boundary index:

 0 1 2 3 4  ->  0 1 2 | 2 3 4

 0 1 2 3    ->  0 1 | 1 2 3




state = the last two inputs (previous input, last input)
ex)
S1 := 1 1
S2 := 1 -1
S3 := -1 1
S4 := -1 -1
if -1 is input @ S1, the state goes to S2
if 1 is input @ S3, the state goes to S1
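The 4-state machine above (state = last two inputs, shifted by each new input) can be written as a small transition function; a minimal sketch:

```python
# state = (previous input, last input); inputs are +/-1
STATES = {(1, 1): "S1", (1, -1): "S2", (-1, 1): "S3", (-1, -1): "S4"}

def next_state(state, x):
    """Shift the new input x into the two-symbol state register."""
    prev, last = state
    return (last, x)

# -1 input at S1 = (1, 1) moves the state to (1, -1) = S2
assert STATES[next_state((1, 1), -1)] == "S2"
# +1 input at S3 = (-1, 1) moves the state to (1, 1) = S1
assert STATES[next_state((-1, 1), 1)] == "S1"
```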


Q) Why do DFE and FWD have the same performance?
 error propagation?


BCEWithLogitsLoss(pred, y) = BCE(Sigmoid(pred), y)   (sigmoid is applied to the prediction only, not to the target y)
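This identity can be checked in pure Python without torch; the logit and target values below are arbitrary. The stable form max(z,0) - z*y + log(1 + exp(-|z|)) is the standard way BCE-with-logits is computed to avoid overflow:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bce(p, y):
    """Plain binary cross-entropy on a probability p and target y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def bce_with_logits(z, y):
    """Numerically stable form: max(z,0) - z*y + log(1 + exp(-|z|))."""
    return max(z, 0.0) - z * y + math.log(1.0 + math.exp(-abs(z)))

pred, y = 0.7, 1.0          # arbitrary logit and hard target
assert abs(bce_with_logits(pred, y) - bce(sigmoid(pred), y)) < 1e-9
```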





< SNR vs. BER plot for FwdBwdNEQ >
1. Train with a specific SNR
2. Save the NN parameters
3. Load the NN parameters and test the NN over variable SNRs
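Step 3 (fixed trained parameters, sweep the test SNR) can be sketched with a toy hard-threshold BPSK detector standing in for the saved NN; everything below is a stand-in, not the actual FwdBwdNEQ:

```python
import numpy as np

rng = np.random.default_rng(1)

def ber_at_snr(snr_db, n=200_000):
    """BER of a hard-threshold BPSK detector at a given SNR (dB), unit Es."""
    bits = rng.integers(0, 2, size=n)
    symbols = 2.0 * bits - 1.0              # map {0,1} -> {-1,+1}
    sigma = 10.0 ** (-snr_db / 20.0)        # noise std for unit symbol energy
    rx = symbols + sigma * rng.normal(size=n)
    decisions = (rx > 0).astype(int)
    return np.mean(decisions != bits)

# "load params once, then test over variable SNR"
bers = {snr: ber_at_snr(snr) for snr in (0, 4, 8)}
```

Plotting `bers` on a log scale against SNR gives the usual waterfall curve.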


< Training SNR vs. performance >
1. The FwdBwd algorithm decodes based on a known Gaussian noise distribution.
2. For example, if the decoder assumes small Gaussian noise, it rules out abnormal cases that only large noise could cause.
3. On the other hand, if the decoder assumes large noise, it accounts for abnormally spiked samples.
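Points 2 and 3 can be made concrete with Gaussian log-likelihoods; the assumed sigmas and the outlier value below are arbitrary:

```python
import math

def gauss_loglik(x, mean, sigma):
    """Log-density of x under N(mean, sigma^2)."""
    return -0.5 * math.log(2 * math.pi * sigma**2) - (x - mean)**2 / (2 * sigma**2)

outlier = 3.0               # received sample far from the +1 symbol
small, large = 0.3, 1.5     # assumed noise std, small vs. large

# under a small assumed sigma the outlier is effectively ruled out...
ll_small = gauss_loglik(outlier, 1.0, small)
# ...while a large assumed sigma still accounts for the spiked sample
ll_large = gauss_loglik(outlier, 1.0, large)

assert ll_small < ll_large
```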
